Computing 3D Geometry Directly from Range Images

Authors

  • Sarah F. Frisken
  • Ronald N. Perry
Abstract

Several techniques have been developed in research and industry for computing 3D geometry from sets of aligned range images. Recent work has shown that volumetric methods are robust to scanner noise and alignment uncertainty and provide good-quality, watertight models. However, these methods suffer from limited resolution, large memory requirements, and long processing times, and they produce excessively large triangle models. In this report, we propose a new volumetric method for computing geometry from range data that: 1) computes distances directly from range images rather than from range surfaces, 2) generates an Adaptively Sampled Distance Field (ADF) rather than a distance volume or a 3-color octree, resulting in significant savings in memory and distance computations, 3) provides an intuitive interface for manually correcting the generated ADF, and 4) generates optimal triangle models (with fewer triangles in flat regions and more triangles where needed to represent surface detail) from the generated ADF octree using a fast new triangulation method.

Presented at SIGGRAPH 2001 Conference Abstracts and Applications.

This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories of Cambridge, Massachusetts; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories. All rights reserved.
Copyright © MERL Mitsubishi Electric Research Laboratories, 2001. 201 Broadway, Cambridge, Massachusetts 02139.

Introduction

Several techniques have been developed in research and industry for computing 3D geometry from sets of aligned range images [1]. Recent work has shown that volumetric methods are robust to scanner noise and alignment uncertainty and provide good-quality, watertight models [2,3,4]. However, these methods suffer from limited resolution, large memory requirements, and long processing times, and they produce excessively large triangle models. The methods of [2,3,4] construct range surfaces for each aligned range image and fill a (fixed-resolution) volumetric representation with signed distances from the range surfaces. These methods use various approaches to reduce the time required to fill and access this volume data, including run-length encoding of the distance values, binary encoding of regions outside a bounded region of the surface, and a 3-color octree representation of the volume. The distance values from multiple scans are combined probabilistically using order-independent or incremental updating. Finally, these methods build a triangle model of the iso-surface of the distance volume using Marching Cubes.
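In its simplest form, the order-independent probabilistic updating used by these prior methods reduces to a per-voxel weighted running average of the signed distances observed by each scan. A minimal sketch of that idea (function and variable names are ours, not taken from the cited implementations):

```python
def update_voxel(d_acc, w_acc, d_new, w_new):
    """Incrementally merge a new signed-distance observation into a voxel.

    d_acc, w_acc: accumulated weighted-average distance and total weight.
    d_new, w_new: the new scan's signed distance and confidence weight.
    Returns the updated (distance, weight) pair; the result does not depend
    on the order in which scans are merged.
    """
    w = w_acc + w_new
    d = (w_acc * d_acc + w_new * d_new) / w
    return d, w

# Merging the same two observations in either order yields the same voxel.
a = update_voxel(*update_voxel(0.0, 0.0, 1.0, 2.0), -0.5, 1.0)
b = update_voxel(*update_voxel(0.0, 0.0, -0.5, 1.0), 1.0, 2.0)
```

Because the update keeps a running sum of weights, scans can be merged incrementally as they arrive, yet the final voxel value is the same as if all scans had been averaged at once.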
In this sketch, we propose a new volumetric method for computing geometry from range data that: 1) computes distances directly from range images rather than from range surfaces, 2) generates an Adaptively Sampled Distance Field (ADF) rather than a distance volume or a 3-color octree, resulting in significant savings in memory and distance computations, 3) provides an intuitive interface for manually correcting the generated ADF, and 4) generates optimal triangle models (with fewer triangles in flat regions and more triangles where needed to represent surface detail) from the generated ADF octree using a fast new triangulation method.

Corrected, Projected Distance Images

Constructing 3D range surfaces and computing distances from these surfaces contribute significantly to the computational requirements of [2,3,4]. If, instead, the distance field could be generated directly from 2D range images, model generation times could be reduced. However, range images do not provide true distance data. In the simplest case, a range image records the perpendicular projected distance from the object surface to the image plane. The projected distance field is the same as the true distance field in two circumstances: 1) throughout the field for a planar surface parallel to the image plane, and 2) at the surface (where both distances are zero) for any surface. Other than in case 1), the projected distance field differs from the true distance field for points off the surface, resulting in artifacts when combining projected distance fields from different viewpoints ([2] suffers from this problem). It can be shown mathematically for a planar surface that the true distance at a location x equals the projected distance divided by the magnitude of the distance field gradient at x, when the gradient is computed using central differences.
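The planar case can be checked directly: if the range image is the height field h(x, y) = a·x + b·y + c viewed along z, the projected distance at a point (x, y, z) is z − h(x, y), its gradient is the constant vector (−a, −b, 1), and dividing by the gradient magnitude √(1 + a² + b²) recovers the true Euclidean point-to-plane distance. A small numerical check (illustrative only; the constants are arbitrary):

```python
import math

# Tilted plane z = a*x + b*y + c, scanned along the z axis.
a, b, c = 0.5, -0.25, 1.0

def projected_distance(x, y, z):
    # Perpendicular projected distance recorded in the range image.
    return z - (a * x + b * y + c)

def corrected_distance(x, y, z):
    # Divide by |grad d_proj| = sqrt(1 + a^2 + b^2). For a plane the
    # gradient is constant, so a precomputed gradient image suffices.
    return projected_distance(x, y, z) / math.sqrt(1.0 + a * a + b * b)

def true_distance(x, y, z):
    # Signed Euclidean distance from (x, y, z) to the plane
    # -a*x - b*y + z = c, via the standard point-to-plane formula.
    return (z - a * x - b * y - c) / math.sqrt(a * a + b * b + 1.0)

p = (0.3, -0.7, 2.0)  # an arbitrary off-surface query point
```

For a curved surface the gradient varies across the image, which is why the method derives it from a 2D gradient image computed once in preprocessing rather than from six extra distance evaluations per query.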
Here we propose to correct the 3D projected distance field by dividing sampled distances by the local gradient magnitude. This results in a better approximation of the true distance field near the surface, yielding better results when combining projected distance fields (see Figures 1 and 2). Computing the local 3D gradient to make this correction could be prohibitive (it requires 6 additional distance computations). Instead, we derive the 3D gradient from a 2D gradient image generated once during preprocessing, resulting in significantly faster generation. Since the range images of many scanning systems are not simple projected distances, they must be converted to this form. However, we have found that conversion is possible from many formats (e.g., laser striping requires a 1D scan-line conversion). While this conversion results in some loss of information from the scan, the benefit is faster performance, allowing interactive updating during data acquisition.

Adaptively Sampled Distance Fields

We recently proposed adaptively sampled distance fields (ADFs) as a new representation for shape [5]. ADFs adaptively sample the signed distance field of an object and store the sample values in a spatial hierarchy (e.g., an octree) for fast processing. ADFs are memory efficient and detail directed, so that distance values are computed from the range images only where needed (i.e., mostly near highly detailed regions of the surface). [5] found that even in 2D, ADFs require 20x fewer distance computations than a comparable 3-color octree representation (used in [4]). Finally, ADFs can be interactively edited via a sculpting interface [6] so that holes and other surface anomalies from occlusions and sensor noise can be easily corrected. The ADF is generated from sequential or order-independent range images using the tiled generator presented in [6].
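The detail-directed sampling behind ADFs subdivides a cell only where interpolation of the corner distances fails to reproduce the field within a tolerance, so flat regions stay coarse. A 2D quadtree sketch on an analytic circle distance field conveys the essential cell-error test (the 3D case uses an octree and trilinear interpolation; all names here are ours):

```python
import math

def circle_dist(x, y, cx=0.5, cy=0.5, r=0.3):
    # Signed distance to a circle: negative inside, positive outside.
    return math.hypot(x - cx, y - cy) - r

def subdivide(x0, y0, size, eps=0.005, depth=0, max_depth=6, cells=None):
    """Recursively build a quadtree over [x0, x0+size] x [y0, y0+size],
    keeping a cell as a leaf when bilinear interpolation of its corner
    distances matches the true field at the cell centre to within eps."""
    if cells is None:
        cells = []
    corners = [circle_dist(x0, y0), circle_dist(x0 + size, y0),
               circle_dist(x0, y0 + size), circle_dist(x0 + size, y0 + size)]
    interp_centre = sum(corners) / 4.0          # bilinear value at the centre
    true_centre = circle_dist(x0 + size / 2.0, y0 + size / 2.0)
    if depth >= max_depth or abs(interp_centre - true_centre) < eps:
        cells.append((x0, y0, size))            # field is near-linear here
    else:
        h = size / 2.0
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            subdivide(x0 + dx, y0 + dy, h, eps, depth + 1, max_depth, cells)
    return cells

cells = subdivide(0.0, 0.0, 1.0)
```

Far from the circle the field is nearly linear and cells stay large, so the leaf count is a small fraction of the 4^6 cells a uniform grid at the finest resolution would need; the same effect is what saves memory and distance computations in 3D.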
Currently, distance values from the range images are combined as though carving the shape from a solid cube of material using a Boolean differencing operator; we have also begun experimenting with adding the probabilistic combining functions of [2,3,4] for robustness to sensor noise.

Fast, Optimal Triangulation

One of the advantages of merging range images in a volumetric distance field is that the field's iso-surface yields a watertight model. However, [2,3,4] rely on the Marching Cubes algorithm to triangulate the iso-surface, which requires that all surface cells be at the same resolution, thus producing an excessive number of triangles. Instead, we take advantage of a new algorithm for triangulating the ADF octree [6], automatically generating fewer triangles in flat regions of the surface and more triangles where the surface has high detail. This method tends to generate an order of magnitude fewer triangles than Marching Cubes for range data, is very fast (generating models with more than 200,000 triangles in 0.37 seconds), and can produce level-of-detail models ideal for applications such as games and physical simulations.
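The Boolean differencing combine described above (carving the shape from a solid cube) can be sketched as a per-sample max, assuming the common convention that signed distances are negative inside material and positive outside: a point remains solid only if every scan reports it inside. This is our reading of the operator, not the authors' code; the probabilistic functions of [2,3,4] would replace the max with a weighted blend.

```python
def carve(d_model, d_scan):
    # Boolean difference of the scanned empty space from the current model:
    # with negative-inside distances, keep the larger ("more outside") value,
    # i.e. intersect the model with the scan's solid half of space.
    return max(d_model, d_scan)

# Start from a solid block (deep inside everywhere), then apply two scans.
d = -1.0            # initial solid cube: far inside the material
d = carve(d, -0.2)  # first scan: point lies 0.2 units inside its surface
d = carve(d, 0.1)   # second scan sees it 0.1 units outside: carved away
```

Because max is associative and commutative, scans can be applied sequentially or in any order with the same result, which matches the sequential/order-independent generation noted above.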
